Talk by Alex Huth of the Gallant lab at UC Berkeley, given to the Redwood Center for Theoretical Neuroscience at UC Berkeley.
Abstract

Human beings have the unique ability to extract the meaning, or semantic content, from spoken language. Yet little is known about how the semantic content of everyday narrative speech is represented in the brain. We used a new fMRI-based approach to show that semantic information is represented in complex cortical maps that are highly consistent across subjects. Using BOLD data collected while subjects listened to several hours of natural narrative stories, we constructed voxel-wise semantic regression models that accurately predict BOLD responses based on semantic features extracted from the stories. These semantic features were defined using a statistical word co-occurrence model. We then used a novel Bayesian generative model of cortical maps to discover how the representations revealed by voxel-wise modeling are organized across the cortical sheet. The results of these analyses show that the semantic content of narrative speech is represented across parietal cortex, prefrontal cortex, and temporal cortex in complex maps comprising dozens of semantically selective brain areas.
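To make the modeling pipeline in the abstract concrete, here is a minimal synthetic sketch of the two core steps: deriving word features from a statistical co-occurrence model, and fitting a voxel-wise (ridge) regression that predicts BOLD responses from those features. This is an illustration under assumed details only; the corpus, dimensionality, ridge penalty, and all data here are made up and are not from the talk.

```python
import numpy as np

# Hypothetical miniature version of the pipeline described in the abstract:
# (1) build word features from a co-occurrence model, then
# (2) fit a per-voxel ridge regression predicting BOLD from those features.
# All data are synthetic; parameter choices are illustrative assumptions.

rng = np.random.default_rng(0)

# --- 1. Word co-occurrence features -----------------------------------
corpus = "the dog chased the cat the cat watched the dog sleep".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
window = 2                            # assumed context window size
counts = np.zeros((len(vocab), len(vocab)))
for t, w in enumerate(corpus):
    for u in corpus[max(0, t - window):t + window + 1]:
        if u != w:
            counts[idx[w], idx[u]] += 1

# Low-dimensional semantic features via SVD of the co-occurrence matrix.
U, s, _ = np.linalg.svd(counts, full_matrices=False)
k = 3                                 # assumed feature dimensionality
features = U[:, :k] * s[:k]           # one k-dim feature vector per word

# --- 2. Voxel-wise ridge regression -----------------------------------
# Stimulus matrix: the feature vector of the word heard at each time point.
T, V = 40, 5                          # time points, voxels (toy sizes)
words = rng.choice(len(vocab), size=T)
X = features[words]                   # (T, k) design matrix
true_W = rng.normal(size=(k, V))      # synthetic "ground truth" voxel weights
Y = X @ true_W + 0.1 * rng.normal(size=(T, V))   # synthetic BOLD responses

alpha = 1.0                           # assumed ridge penalty
W = np.linalg.solve(X.T @ X + alpha * np.eye(k), X.T @ Y)
Y_hat = X @ W                         # predicted response per voxel
corrs = [np.corrcoef(Y[:, v], Y_hat[:, v])[0, 1] for v in range(V)]
print("prediction correlation per voxel:", np.round(corrs, 2))
```

In the actual study the design matrix would also account for the hemodynamic response (e.g. via lagged features) and the model would be evaluated on held-out stories; this sketch omits both for brevity.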